
Lov&Data

4/2025: Artikler
11/12/2025

The COMPAS case's impact on the EU's AI Act

By Anna Stensrud, a recently graduated lawyer with an interdisciplinary background in international studies and history. She has worked as a journalist in recent years and has now taken on the role of data protection adviser at DSS.

This article addresses key findings from my master's thesis, which examined how the controversy over the COMPAS recidivism algorithm in the US directly and indirectly shaped the requirements for AI used by law enforcement and the judiciary within the EU AI Act.(1)

Illustration: Colourbox.com

First, a brief explanation of the COMPAS revelation will be given. From here on it will be referred to as “the COMPAS case”, understood not as a case in the legal sense, but as a concrete case study. A short introduction to the AI Act (AIA) is then presented. Due to space limitations, this article will not dig into the academic debate sparked by the COMPAS case or the regulatory work leading up to the AIA in which the COMPAS case was referenced.(2) Instead, the direct impact of the COMPAS case, found through references in the AIA's Impact Assessment parts 1 and 2, is at the center of this article. After presenting the findings, the pros and cons of the transnational influence of regulatory approaches will be discussed. The aim of the article is to illustrate how a local challenge, rooted in a particular jurisdiction, informed regulatory efforts in an entirely different legal and cultural context – and to question the fruitfulness of this influence.

COMPAS – the not-so-trustworthy AI

The COMPAS algorithm, used in several US court systems, was shown in ProPublica's landmark 2016 study to disproportionately label Black defendants as high risk despite equivalent reoffending rates – a finding that sparked global debates about algorithmic bias and trust in AI-driven judicial tools.(3) The investigative research looked at over “10,000 criminal defendants in Broward County, Florida, and compared their predicted recidivism rates with the rate that actually occurred over a two-year period[…and] compared the recidivism risk categories predicted by the COMPAS tool to the actual recidivism rates.”(4) They found that Black defendants were disproportionately classified as high-risk for reoffending, while White defendants were more likely to be incorrectly categorized as low-risk.(5) This bias was rooted in the algorithm's reliance on historical data that reflected systemic inequalities, perpetuating discrimination through automated decision-making processes.
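To make the type of comparison ProPublica ran concrete, the sketch below (a minimal Python illustration with hypothetical field names and toy data, not ProPublica's actual code or dataset) computes false positive and false negative rates per group. A gap in these error rates between groups, despite comparable overall reoffending rates, is precisely the kind of disparity the investigation reported.

# Minimal, illustrative sketch of a group-wise error-rate comparison.
# Field names, labels and data are hypothetical, not ProPublica's dataset.
from dataclasses import dataclass

@dataclass
class Defendant:
    group: str          # e.g. "Black" or "White" (hypothetical labels)
    high_risk: bool     # COMPAS-style prediction: classified as high risk
    reoffended: bool    # observed recidivism within the two-year follow-up

def error_rates(defendants: list[Defendant], group: str) -> tuple[float, float]:
    """Return (false positive rate, false negative rate) for one group.

    FPR: share of non-reoffenders wrongly labelled high risk.
    FNR: share of reoffenders wrongly labelled low risk.
    """
    members = [d for d in defendants if d.group == group]
    non_reoffenders = [d for d in members if not d.reoffended]
    reoffenders = [d for d in members if d.reoffended]
    fpr = sum(d.high_risk for d in non_reoffenders) / len(non_reoffenders)
    fnr = sum(not d.high_risk for d in reoffenders) / len(reoffenders)
    return fpr, fnr

# Toy usage: a disparity in FPR between groups, despite similar reoffending
# rates, is the pattern described in the ProPublica analysis.
sample = [
    Defendant("Black", high_risk=True, reoffended=False),
    Defendant("Black", high_risk=True, reoffended=True),
    Defendant("Black", high_risk=False, reoffended=True),
    Defendant("White", high_risk=False, reoffended=False),
    Defendant("White", high_risk=False, reoffended=True),
    Defendant("White", high_risk=True, reoffended=True),
]
for g in ("Black", "White"):
    fpr, fnr = error_rates(sample, g)
    print(f"{g}: FPR={fpr:.2f}, FNR={fnr:.2f}")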

The defendants had to answer a questionnaire asking about employment, education, family background, friends and crime in their neighborhood.(6) According to scholars, these are variables which, combined, could lead to racial bias.(7) “A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job. […] Meanwhile, a drunk guy will look high risk because he’s homeless,” a Superior Court Judge commented about COMPAS.(8) Additionally, the opaque nature of COMPAS’ “black box” algorithm raised concerns about transparency and accountability, as neither defendants nor legal professionals could understand or challenge its outputs.(9) This opacity undermines due process rights and the ability to challenge potentially erroneous or biased assessments.

The COMPAS case is a helpful case study in several ways. For one, COMPAS is a tool used by law enforcement and judges in the US, and it revealed the challenges that the use of AI poses in these fields. Secondly, there is evidence of the COMPAS case influencing specific parts of the AIA, as this article will go on to show.

The AIA and the protection of fundamental rights

On the other side of the Atlantic Ocean, the work on trustworthy AI began in the European Union (EU).(10) This work would culminate in the pioneering AI regulation, the “Artificial Intelligence Act” (Regulation (EU) 2024/1689). The AIA is significant because it tries to adapt to present and future society's use of AI, based on challenges that have already occurred and predictions of what might come. The shift from ethical guidelines to binding regulation came when Ursula von der Leyen became president of the European Commission in 2019. Under her leadership, the Commission highlighted the need for unified EU legislation on AI to protect fundamental rights.(11) This marks an important shift within AI governance in the EU, from voluntary ethical guidelines to protecting fundamental rights, which are foundational to the EU through the European Convention on Human Rights (ECHR).(12) After debates among the EU leaders in October 2020, the Commission formally proposed the AIA on April 21st 2021, accompanied by an Impact Assessment, which is a central source for this article. After intense trilogue discussions in December 2023, a political agreement was reached, and the Act was adopted by the European Parliament on March 13th 2024, approved by the Council on May 21st 2024, and entered into force on August 1st 2024.

The heart of the Act lies in Chapter III, which focuses on high-risk AI systems.(13) Chapter III is subdivided into five sections that detail how high-risk systems are classified (Section 1), the strict requirements they must meet (Section 2), and the obligations placed on various actors in the AI value chain, such as providers, deployers, importers, and distributors (Section 3). It also establishes the roles of notifying authorities and notified bodies (Section 4) and standards, conformity assessments, certification, and registration processes (Section 5). This structure aims to ensure that the most potentially harmful AI applications are subject to robust ex-ante scrutiny before entering the market. Chapter III, Section 2 is central to the analysis in this article. Section 2 details the essential requirements that high-risk AI systems must meet before being placed on the market or put into service.(14) These requirements include the need for a robust risk management system in Article 9, high standards for data quality and governance in Article 10, thorough technical documentation in Article 11, and effective record-keeping in Article 12. Additionally, providers must ensure transparency and provide adequate information to deployers under Article 13, implement human oversight measures as stated in Article 14, and guarantee accuracy, robustness, and cybersecurity according to Article 15. These obligations are designed to make sure that high-risk AI systems operate safely and reliably, with clear accountability throughout their lifecycle. Articles 10 and 14 are especially important for mitigating bias in AI systems like COMPAS, and several studies have also connected the COMPAS case to these articles,(15) an argument backed by the findings in this article.
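As a purely illustrative aid, the following Python sketch shows one way the record-keeping (Article 12) and human oversight (Article 14) obligations could be reflected in the design of a COMPAS-like risk tool: every advisory score is logged, and a named human reviewer must explicitly confirm or override it. All names, fields and the log file are hypothetical; this is one possible reading of the requirements, not an implementation prescribed by the AIA.

# Hypothetical sketch of record-keeping (Art. 12) and human oversight (Art. 14)
# in a COMPAS-like risk tool. Illustrative only, not mandated by the AI Act.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RiskAssessment:
    case_id: str
    score: float     # advisory model output, never a final decision
    rationale: str   # information helping the deployer interpret the output (cf. Art. 13)

def log_event(event: dict, logfile: str = "assessment_log.jsonl") -> None:
    """Append a timestamped record, supporting traceability over the system's lifecycle."""
    event["timestamp"] = time.time()
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def decide(assessment: RiskAssessment, reviewer: str, accept: bool, reasons: str) -> dict:
    """A human reviewer must actively confirm or override the advisory score."""
    decision = {
        "assessment": asdict(assessment),
        "reviewer": reviewer,
        "human_decision": "accepted" if accept else "overridden",
        "reasons": reasons,
    }
    log_event(decision)
    return decision

# Usage: the score informs, the named human decides, and both are recorded.
a = RiskAssessment(case_id="2025-0042", score=0.71, rationale="toy example output")
decide(a, reviewer="Reviewer X", accept=False, reasons="score conflicts with case file")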

The findings in the Impact Assessment

The European Commission conducts impact assessments for its legislative and non-legislative initiatives that are expected to have significant economic, environmental, or social impacts – like the AIA.(16) The Impact Assessment accompanying the AIA therefore gives us valuable insight “behind the scenes” of the Act. We find the COMPAS case listed in both the first and the second part of the Impact Assessment. Part 1 identifies bias and discrimination as significant risks posed by AI systems, stating that discriminatory AI systems used in the judiciary or law enforcement may “lead to broader societal consequences, reinforcing existing or creating new forms of structural discrimination and exclusion”.(17) Further on in part 1, the European Commission outlines a “baseline scenario” in chapter 5.1. Here they state that, in the absence of the AIA, the risks identified with e.g. discriminatory AI would remain unaddressed.(18) The “baseline scenario” described in the Impact Assessment therefore reflects the policy landscape before the AIA: a patchwork of general product safety rules, sectoral laws, and voluntary guidelines, with no harmonized, risk-based EU framework for AI.(19) COMPAS is referenced in this part of the Impact Assessment as one of four examples “where claims of discrimination have already led to pressure from public opinion.”(20) The COMPAS case is referenced together with these three cases: facial recognition discrimination, Amazon's hiring model and the ‘sexist’ Apple card.(21) All the examples listed had their origin in the US, which underscores the transnational influence of American cases of claimed discrimination.

The Impact Assessment cites the COMPAS case as an example of how algorithmic tools can perpetuate discrimination and undermine trust. It warns that, without EU-level action, member states would likely adopt divergent national rules in response to public outcry over AI harms, leading to market fragmentation and inconsistent protection of fundamental rights.(22) The Act is designed to harmonize rules across the EU for high-risk AI systems, ensuring a consistent level of protection and legal certainty for both providers and users. This harmonization is a key departure from the baseline scenario, where such fragmentation is a likely outcome. The baseline scenario relies heavily on voluntary codes and ethical guidelines, such as those developed by the AI HLEG.(23) However, the Impact Assessment notes that these mechanisms lack enforceability and public oversight. In contrast, the AIA establishes binding obligations for high-risk AI systems, with oversight by national authorities and the European Commission, thus moving beyond the limitations of soft law. Under the baseline, discriminatory issues would be addressed only indirectly, if at all, through general anti-discrimination law or sectoral regulation, not through targeted AI legislation. The Act, through Annex III number 6(d)-(e), directly addresses these concerns by classifying AI systems used for risk assessment and profiling in law enforcement as high-risk, thereby subjecting them to strict requirements for e.g. transparency in Article 13, human oversight in Article 14, and bias mitigation in Article 10.(24) This regulatory approach is arguably a direct response to the shortcomings exposed by cases like COMPAS, which the Impact Assessment highlights as a justification for moving beyond the baseline.

The Impact Assessment's part 2 shows that the high-risk categorization of the AIA was based on 132 use cases, stakeholder feedback and alignment with existing EU law such as the GDPR, product safety directives, and fundamental rights frameworks.(25) The COMPAS case heavily impacted one of the high-risk use cases studied by the Commission.(26) The Impact Assessment does not speak of specific articles or annexes of the AIA, as they were not yet settled upon in 2021. Annex III is therefore not explicitly mentioned by name in the Impact Assessment. However, Section 5.4 in the second part of the Impact Assessment covers high-risk AI systems that are not covered by sectoral product safety legislation and that mainly have implications for fundamental rights.(27) The high-risk AI systems considered in 5.4 are what came to be listed in Annex III: 1) biometrics, 2) critical infrastructure, 3) education and vocational training, 4) employment, 5) access to essential services, 6) law enforcement, 7) migration and 8) administration of justice and democratic processes.(28) It is in Annex 5, section 5.4 of the Impact Assessment's part 2 that we find the COMPAS case connected to one of the use cases.(29) The COMPAS case is listed as one of the five evidence sources under “AI systems used to assist judicial decisions, unless for ancillary tasks.”(30) However, the impact of the COMPAS case is also evident in three of the four other sources listed. One of them is the CoE Charter written by the CEPEJ.(31) Another source is the Wisconsin Supreme Court case State v. Loomis, where the COMPAS case's findings were a matter of debate.(32) Two specific pages of a fourth source, “Algorithms and human rights”, published by the CoE in 2017, are also listed. Within these two pages the COMPAS case is again used as an example of judicial bias.(33) In fact, there is only one of the five listed sources that the COMPAS case is not a part of.(34) The potential harms listed under “AI systems used to assist judicial decisions, unless for ancillary tasks” are “[i]ntense interference with a broad range of fundamental rights (e.g. effective remedy and fair trial, non-discrimination, right to defence, presumption of innocence, right to liberty and security, human dignity as well as all rights granted by Union law that require effective judicial protection)” and “systemic risk to rule of law and freedom”.(35) Five “especially relevant indicative criteria” are also listed:

  • Increased possibilities for use by judicial authorities in the EU

  • Potentially very severe impact and harm for all rights dependent on effective judicial protection

  • High potential to scale and adversely impact a plurality of persons or groups (due to large number of affected individuals)

  • High degree of dependency (due to inability to opt out) and high degree of vulnerability vis-à-vis judicial authorities

  • Indication of harm (high probability of historical biases in past data used as training data, opacity).(36)

The COMPAS case's influence here is evident, directly and indirectly through three of the four other sources. Not only did the COMPAS case reveal the severe personal consequences of algorithmic bias, reinforcing systemic bias and racism; it also highlighted how AI could weaken the trustworthiness of the justice system, through a lack of competent human oversight and overreliance on AI tools in decision-making. The potential harms and especially relevant indicative criteria show that the Commission has taken the lessons learned from the COMPAS case very seriously, which is also reflected in specific articles of the AIA's Chapter III, Section 2. The Commission also states that “AI systems used for judicial decision-making, in the law enforcement sector and in the area of asylum and migration should comply with standards relating to increased transparency, traceability and human oversight which will help to protect the right to fair trial, the right to defense and the presumption of innocence.”(37) This links what came to be Annex III number 6 of the AIA to requirements of transparency in Article 13, traceability in Article 12 and human oversight in Article 14.

Transnational Influence – a blessing or a curse?

Although the COMPAS case's impact on Annex III number 6(d) is evident, the Act's structure and content are shaped primarily by the EU's internal policy process, legal precedents, and stakeholder consultations, with the COMPAS case serving as an illustrative example rather than a foundational influence. The COMPAS case is deeply embedded in the context of the US criminal justice system, which differs significantly from European systems in terms of legal standards, data practices, and the role of risk assessment tools. While the study is often cited as a cautionary tale, its specific findings may not be directly transferable to the European context. The EU's high-risk classification for AI in law enforcement and justice is based on broader concerns about power imbalances and fundamental rights, not solely or primarily on cases like COMPAS. Nevertheless, the sum of cases like the COMPAS case paves the way for new policies. The ProPublica publication is one of many international examples considered in the global debate on algorithmic bias, informing one of 132 use cases. The AIA's provisions on bias, transparency, and human oversight reflect a synthesis of many sources, with no single study or case being decisive on its own, but several being influential. The references to COMPAS in consultations and assessments are part of a broader pattern of drawing on international experiences to inform, but not dictate, EU policy.(38)

Scholars note that the EU's approach to AI regulation is intentionally distinct from US models, aiming for a “Brussels Effect” where the EU sets global standards through its own regulatory philosophy.(39) In this way, the transnational influence might come “full circle” – from the EU being influenced by AI bias in the US, to the EU impacting companies through the AIA. Europe has been the most important market for many US tech companies such as Google.(40) This means that American AI giants and SMEs must also adapt to European regulations to take part in the lucrative market, possibly changing the overall procedures of AI developers.

Given the stark differences in legal, cultural, and institutional contexts between the US and Europe, the COMPAS case might not be transferable to the European context. The use of risk assessment tools in Europe is not widespread either, but the Impact Assessment's part 2 lists the “increased possibilities for use by judicial authorities in the EU.”(41) There is, for example, a Harm Assessment Risk Tool (HART) already in use in the UK.(42) The AI tool was developed to help police make custodial decisions, but the researchers are clear: it is for guidance only, and it needs to be fed updated data and be under constant scrutiny.(43) The EU countries Germany and the Netherlands prioritize rehabilitation over risk prediction, often integrated into broader social service frameworks.(44) In the US, COMPAS operates within a legal framework with weak data protection laws, limited transparency requirements for proprietary algorithms, and judicial practices that more often defer to algorithmic outputs, as seen in cases like State v. Loomis. By contrast, the EU's regulatory landscape, anchored in GDPR Article 22's restrictions on automated decision-making and the AIA's prohibition in Article 5(1)(d) and high-risk regulation in Annex III number 6(d), reflects a fundamentally different approach.(45) The EU prioritizes ex-ante safeguards, human oversight, and procedural fairness, reducing the likelihood of a European COMPAS controversy. For example, AI systems in EU law enforcement must now undergo conformity assessments, provide technical documentation, and ensure human intervention, all of which address COMPAS’ core flaws: opacity, bias, and flawed oversight.(46) Nevertheless, relying on COMPAS as a justification for EU regulation of recidivism tools used in law enforcement risks overemphasizing a foreign example and interpreting evidence to fit preexisting concerns. By framing regulation around a case like COMPAS, policymakers might overlook EU-specific risks, while overregulating areas where existing safeguards already mitigate harm.

Table of references

Laws and regulations:

  1. ECHR Convention for the Protection of Human Rights and Fundamental Freedoms, Rome 4 November 1950.

  2. Legislative initiative procedure 2015/2103(INL) European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics [Civil Law Rules on Robotics].

  3. Regulation (EU) 2024/1689 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence [Artificial Intelligence Act].

  4. Regulation (EU) 2016/679 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [General Data Protection Regulation].

Court cases:

  1. Judgement of Wisconsin Supreme Court of 13 July 2016, State v. Loomis, 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017).

  2. Judgement of French Conseil Constitutionnel of 12 June 2018, Decision no. 2018-765 DC.

Reports and documents:

  1. AI HLEG. Draft Ethics guidelines for trustworthy AI. Brussels: 2018. Available for download through: https://digital-strategy.ec.europa.eu/en/library/draft-ethics-guidelines-trustworthy-ai

  2. AI HLEG. Ethics guidelines for trustworthy AI. Brussels: 2019. Available for download through: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  3. CEPEJ. European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment. Strasbourg: CoE, 2018. https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c

  4. Ceimia. A Comparative Framework for AI Regulatory Policy. Montréal: 2023. https://ceimia.org/wp-content/uploads/2023/05/a-comparative-framework-for-ai-regulatory-policy.pdf

  5. European Commission. White Paper on Artificial Intelligence - A European approach to excellence and trust. COM(2020) 65 final. Brussels: 2020. https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

  6. European Commission. Commission Staff Working Document Impact Assessment. SWD(2021) 84 final (part 1/2). Brussels: 2021. https://eur-lex.europa.eu/resource.html?uri=cellar:0694be88-a373-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

  7. European Commission. Commission Staff Working Document Impact Assessment Annexes. SWD(2021) 84 final (part 2/2). Brussels: 2021. https://eur-lex.europa.eu/resource.html?uri=cellar:0694be88-a373-11eb-9585-01aa75ed71a1.0001.02/DOC_2&format=PDF

  8. Northpointe. Risk Assessment. 2011. https://embed.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE/

  9. Subramanian, Ram and Alison Shames. Sentencing and Prison Practices in Germany and the Netherlands: Implications for the United States. New York: VERA Institute of Justice, 2013. https://www.prisonpolicy.org/scans/vera/european-american-prison-report-v3.pdf

Literature:

  1. Angwin, Julia, Jeff Larson, Surya Mattu and Lauren Kirchner. “Machine bias.” ProPublica, May 23rd 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  2. Baker, Steven. “Ensuring Procedural Fairness in AI-Driven Criminal Sentencing: A Focus on Transparency and The Right to Explanation.” NHSJCS, 2025. https://nhsjcs.com/2025/02/27/ensuring-procedural-fairness-in-ai-driven-criminal-sentencing-a-focus-on-transparency-and-the-right-to-explanation/

  3. Barnes, Geoffrey, Lawrence Sherman and Sheena Urwin. “Needles and haystacks: AI in criminology.” Research Horizons no. 35 (2018) p. 32-33. https://www.cam.ac.uk/system/files/issue_35_research_horizons_new.pdf

  4. Boskovic, Marina M. Matic. “Implications of EU AI Regulation for Criminal Justice.” Regional Law Review (2024) p. 111-120. DOI:10.56461/iup_rlrc.2024.5.ch8

  5. Bradford, Anu. The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press, 2020.

  6. Carnat, Irina. “Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act.” Computer Law & Security Review vol. 55 (2024) 106067.

  7. Chroust, Tomas. “The AI Act requires human oversight.” Bearingpoint, 2025. https://www.bearingpoint.com/en/insights-events/insights/the-ai-act-requires-human-oversight/

  8. Larson, Jeff, Surya Mattu, Lauren Kirchner and Julia Angwin. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica, May 23rd 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

  9. Robertson, Craig. “Impact Assessment in the European Union.” EIPASCOPE vol 2 (2008) p. 17-20.

  10. Stensrud, Anna. Bias Beyond Borders: Decoding Transnational Influence in the EU's AI Act - An analysis of how US experiences informed the AI Act, and how it regulates recidivism tools like COMPAS [Master thesis]. Oslo: University of Oslo, 2025.

  11. The Decision Lab. “Why do we think some things are related when they aren’t?” N.d. https://thedecisionlab.com/biases/illusory-correlation. Last used May 22nd 2025.

  12. Van Dijck, G. “Predicting Recidivism Risk Meets AI Act.” European Journal on Criminal Policy and Research vol. 28, no. 3 (2022) p. 407-423. https://doi.org/10.1007/s10610-022-09516-8

  13. Whitehead, Orlando. “€250 on the spot fines for bicycle thieves.” The Brussels Times, December 27th 2021. https://www.brusselstimes.com/199457/e250-on-the-spot-fines-for-bicycle-thieves

Noter

  1. Stensrud (2025).
  2. See Stensrud (2025) Chapter 4 and 5 for this research.
  3. Angwin et al. (2016).
  4. Larson et al. (2016).
  5. Angwin et al. (2016) and Larson et al. (2016).
  6. Angwin et al. (2016) and Northpointe (2011).
  7. Baker (2025).
  8. Angwin et al. (2016).
  9. The term “black box” in the context of AI and machine learning refers to systems whose internal workings are opaque or not easily understandable, even to their creators.
  10. 2015/2103(INL) point 16.
  11. European Commission (2020) “Whitepaper on AI” p. 2.
  12. See AI HLEG (2019) “Ethics Guidelines for Trustworthy AI”.
  13. AIA, Article 6-49.
  14. AIA, Article 8-15.
  15. Carnat (2024), Chroust (2025) and van Dijck (2022).
  16. Robertson (2008) p. 17.
  17. European Commission (2021) part 1/2 p. 14, 19-20.
  18. European Commission (2021) part 1/2 p. 37-29.
  19. For example the Machinery Directive, General Product Safety Directive, the forthcoming Digital Services Act, voluntary industry codes, and soft law.
  20. European Commission (2021) part 1/2 p. 38.
  21. European Commission (2021) part 1/2 p 38.
  22. European Commission (2021) part 1/2 p 38.
  23. AI HLEG (2019).
  24. AIA, Article 8-15.
  25. European Commission (2021) part 2/2 Annex 2, 4 and 5. A use case is an example of how an AI system is used to do a specific job or help with a certain task, for example AI systems used for recruitment or to assist judicial decisions.
  26. European Commission (2021) part 2/2 p. 46.
  27. European Commission (2021) part 2/2 p. 40.
  28. European Commission (2021) part 2/2 p. 41-46 and AIA, Annex III number 1-8.
  29. European Commission (2021) part 2/2 p. 46.
  30. European Commission (2021) part 2/2 p. 46.
  31. CEPEJ (2018).
  32. State v. Loomis, 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017). See paragraph 63 for one of several references to the COMPAS case.
  33. European Commission (2021) part 2/2 p. 46 and CoE (2017) p. 12.
  34. The reference is to a decision of the French Conseil Constitutionnel of June 12th 2018, Decision no 2018-765 DC.
  35. European Commission (2021) part 2/2 p. 46.
  36. European Commission (2021) part 2/2 p. 46.
  37. European Commission (2021) part 2/2 p. 48 and ECHR, Article 47 and 48.
  38. AI HLEG (2018).
  39. Ceimia (2023) p. 2-4 and 11.
  40. Bradford (2020) p. 27.
  41. European Commission (2021) part 2/2 p. 46.
  42. Barnes et al. (2018) p. 33.
  43. Barnes et al. (2018) p. 33.
  44. Subramanian and Shames (2013) p. 7-14.
  45. AIA, Article 5(1)(d) prohibits AI systems that assess criminal risk based solely on profiling or personality traits, with exceptions only if used to support human judgment.
  46. AIA, Article 43, 11 and 14.
Anna Stensrud